Support ServiceWorkers in Private Browsing Mode
Categories
(Core :: DOM: Service Workers, enhancement, P3)
People
(Reporter: zachlym, Assigned: asuth)
References
(Depends on 1 open bug, Blocks 1 open bug)
Details
(Keywords: parity-chrome, parity-safari, webcompat:platform-bug, Whiteboard: webcompat:risk-moderate )
User Story
platform-scheduled:2025-01-30
Attachments
(5 files)
Comment 1•9 years ago
Comment 2•9 years ago
Comment 7•4 years ago
The tentative plan is indeed to use temporary disk storage that's encrypted so that in the event of a browser crash the data will remain unreadable. We're starting with IndexedDB support for this mode of operation and bug 1639542 is the meta-bug for that. This is a longer term effort.
Comment hidden (advocacy)
Comment 9•4 years ago
Just chiming in with another motivating example: VS Code. On the web, VS Code uses service workers somewhat like virtual servers to power our webviews, so they are a core part of our product and not just a nice-to-have capability.
You can see this issue in VS Code for web by following these steps:
- In a private window
- Go to https://vscode-web-test-playground.azurewebsites.net
- Open the image file in the explorer (file.jpg) and see that nothing is shown. The root cause is that service workers are unavailable.
We don't believe there is a workaround for cases where service workers are unavailable, so we instead show an error message to users suggesting that they switch out of private/incognito mode.
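A minimal sketch of the detection the comment describes: checking not just that the Service Worker API exists, but that registration actually succeeds. The script URL and function name here are illustrative, not from VS Code's actual implementation.

```javascript
// Minimal sketch: detect whether Service Workers are actually usable,
// not merely present. In Firefox private windows (before this fix),
// navigator.serviceWorker could be absent entirely; in other restricted
// modes the API exists but register() rejects.
async function serviceWorkersUsable(scriptUrl = "/sw.js") {
  if (typeof navigator === "undefined" || !("serviceWorker" in navigator)) {
    return false; // API missing entirely (e.g. old private-browsing behavior)
  }
  try {
    // register() may reject (e.g. with a SecurityError) in restricted modes
    await navigator.serviceWorker.register(scriptUrl);
    return true;
  } catch (err) {
    return false;
  }
}
```

A page would call this at startup and show its "please leave private mode" message when it resolves to false.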
Comment 10•4 years ago
Adding another example, quite similar to the VSCode one in the previous comment.
For https://github.com/google/playground-elements (used at e.g. https://lit.dev/playground/) we use Service Workers to host a virtual server for live previews of code written by the user.
One of the advantages to using a Service Worker for this kind of use case, as opposed to a traditional backend, is that it doesn't require any of the user's input to leave the browser. It's unfortunate (and a bit ironic) that we may need to implement a fallback traditional server that would require sharing more user data, only for the incognito case.
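The "virtual server" pattern described above can be sketched roughly as follows. The file map and routing function are hypothetical illustrations, not playground-elements' real code; the routing logic is factored out of the fetch handler so it can run anywhere.

```javascript
// Sketch of a service-worker "virtual server": user-authored files live
// in a map, and the worker's fetch handler answers requests for them
// without any user input leaving the browser.
const virtualFiles = new Map([
  ["/preview/index.html", { type: "text/html", body: "<h1>hi</h1>" }],
  ["/preview/app.js", { type: "text/javascript", body: "console.log('hi')" }],
]);

function routeVirtualRequest(pathname, files = virtualFiles) {
  const entry = files.get(pathname);
  if (!entry) return { status: 404, type: "text/plain", body: "not found" };
  return { status: 200, type: entry.type, body: entry.body };
}

// Inside the service worker this would be wired up roughly as:
//   self.addEventListener("fetch", (event) => {
//     const { pathname } = new URL(event.request.url);
//     const r = routeVirtualRequest(pathname);
//     event.respondWith(new Response(r.body, {
//       status: r.status, headers: { "Content-Type": r.type },
//     }));
//   });
```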
Comment 11•4 years ago
Also note https://bugzilla.mozilla.org/show_bug.cgi?id=1601916 appears to be a duplicate of this issue.
Comment 14•3 years ago
This is possibly related to https://bugzilla.mozilla.org/show_bug.cgi?id=1755167
In non-private mode, when the "Delete cookies and site data when Firefox is closed" option is checked, service workers also fail to register (though curiously, navigator.serviceWorker is still present).
Comment 15•3 years ago
This is an important feature: service workers are everywhere now, and their absence in private mode makes Firefox less relevant for browsing.
For OIDC authentication, service workers are a pretty secure solution (probably the most secure one), and this bug is really getting in the way.
Can the priority be raised?
Comment 16•3 years ago
FYI, there is now a "known issue" added for Firefox at https://caniuse.com/serviceworkers . This is a serious issue, SWs are used on a lot of sites.
Comment 17•3 years ago
Hello, my 2 cents: I want my users to use the Cache API so they can store some personal data on their device instead of storing that information on my server (which means more privacy and control for them).
It works fine in Chrome/Edge/Safari private modes, even if it is less performant than in regular mode since the cache needs to be rebuilt from scratch every time they close the site.
But with Firefox it does not work at all, and I don't want to use localStorage, which is too limited.
So for now, I advise my users to use another browser if they want private mode (and explain to them what private mode is about).
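The degraded-but-working behavior the comment describes in other browsers could be approximated with a small wrapper: use the Cache API when `caches` is available, otherwise fall back to an in-memory store that is lost on reload. This is an illustrative sketch, not the commenter's code; `openStore` and its method names are made up.

```javascript
// Hedged sketch: prefer the Cache API for client-side storage, falling
// back to an in-memory Map when caches is unavailable (as in Firefox
// private windows before this bug was fixed). The fallback loses data
// when the page closes, mirroring the degraded experience described above.
async function openStore(name) {
  if (typeof caches !== "undefined") {
    const cache = await caches.open(name);
    return {
      async put(key, value) { await cache.put(key, new Response(value)); },
      async get(key) {
        const res = await cache.match(key);
        return res ? res.text() : undefined;
      },
    };
  }
  // In-memory fallback: survives only for the lifetime of the page.
  const map = new Map();
  return {
    async put(key, value) { map.set(key, value); },
    async get(key) { return map.get(key); },
  };
}
```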
Comment 18•2 years ago
We will need bug 1837276 addressed in order to ensure that all ServiceWorkers are torn down promptly. And in fact, if we don't add coverage in that bug that verifies that the Service Workers have all been terminated (via non-public WPT because of internal checks), we should add it here or in a specific SW PB bug to ensure that the globals have been torn down as part of the process.
While there notionally exist a few content-space mechanisms for detecting the termination of a global (WebLocks, errors/closures propagated through streams), they aren't suitable for the PB case because in general we'd also be tearing down any other global in the PB OriginAttribute transitive closure graph, plus potentially explicitly tearing down the origin in the PBackground "daemons" that might coordinate (ex: WebLocks). Probably the most practical approach is to:
- create a system JS helper that can provide a defined contract but allows us to change the implementation later on to be more efficient
- The most comprehensive approach right now might be to cause system JS code to be run in every process that enumerates all the available nsIWorkerDebuggers and ensures that none exist with a PB principal.
- A more efficient thing we could do is rely on the RemoteWorker logic to confirm that there are no live RemoteWorkers with a PB OriginAttribute. However, that logic is authoritative on PBackground, not the main thread right now (although I think we largely do want to change this).
- But a proxy for both things could just be to assert that there are no content processes alive covering a PB OriginAttribute. Because the RemoteWorker infrastructure keeps the content processes alive while it wants to keep a worker alive, verifying there are none of these processes effectively verifies that the RemoteWorker infrastructure did tear down all of the workers in question.
- Note that this is orthogonal to the structural concern about race conditions related to nsIClearDataService and epochs I raise in https://bugzilla.mozilla.org/show_bug.cgi?id=1776271#c5 (and previously). StorageKey would continue to be our plan to address this in Workers & Storage space.
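The verification contract sketched in the bullets above can be modeled in a few lines. The real check would enumerate Gecko's nsIWorkerDebugger instances from system JS; here the worker list is a plain array of hypothetical descriptors so the contract itself can be exercised anywhere.

```javascript
// Illustrative model of the teardown check: after private browsing ends,
// no live worker should remain whose origin attributes mark it as private.
// The descriptor shape ({ originAttributes: { privateBrowsingId } }) is an
// assumption for illustration, loosely mirroring Gecko's OriginAttributes.
function assertNoPrivateWorkers(workers) {
  const leaked = workers.filter(
    (w) => w.originAttributes && w.originAttributes.privateBrowsingId > 0
  );
  if (leaked.length > 0) {
    throw new Error(`${leaked.length} private-browsing worker(s) still alive`);
  }
  return true;
}
```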
Comment 19•2 years ago
Not too important, but another use case is that this removes another vector used to detect private browsing and interfere with/based on user preferences.
Comment 20•2 years ago
Hello,
I need this feature to be able to run automated tests on Firefox.
Some features of our product won't work without it.
Thank you!
Comment 21•1 year ago
To most directly run the tests, run:
./mach mochitest --headless -f plain --tag dom-serviceworkers-api-private dom/serviceworkers/test/
Current notes:
test_bad_script_cache.html fails because our createChromeCache hack does not work and will need to be updated, but this is not general ServiceWorker operation.
Comment 22•5 months ago
During ServiceWorkers in PBM testing it was identified that the call to
EnsureNSSInitializedChromeOrContent made by ContentChild::PreallocInit
was load-bearing for DecryptingInputStream when used to decrypt the
ServiceWorker script because DecryptingInputStream did not ensure NSS
was initialized itself. Specifically, when a freshly spawned content
process was used without first telling it to be a preallocated process,
the ServiceWorker script would fail to load because DIS would provide a
0-length script due to NSS not being initialized.
This patch ensures that our encrypting and decrypting streams make sure
NSS is initialized. Most of the time the fast path will be taken.
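The fix described above follows a common "ensure initialized" pattern: every entry point that needs a subsystem calls an idempotent init guard, so correctness never depends on some other caller (here, ContentChild::PreallocInit) having run first. A language-neutral sketch of the pattern, with made-up names:

```javascript
// Idempotent init guard: doInit runs exactly once, no matter how many
// call sites invoke ensureInit(). After the first call, only the cheap
// flag check runs (the "fast path" mentioned in the comment above).
function makeEnsureInit(doInit) {
  let initialized = false;
  return function ensureInit() {
    if (initialized) return; // fast path: just a flag check
    doInit();                // slow path: performed exactly once
    initialized = true;
  };
}
```

In the patch, the analogous guard is the call ensuring NSS is up before the encrypting/decrypting streams touch it.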
Comment 23•5 months ago
When an encrypted Blob is used as a fetch POST payload to a controlled
ServiceWorker that does not handle the fetch event:
- The encrypted Blob will be read once. The base FileBlobImpl requests that the file be opened with CLOSE_ON_EOF and REOPEN_ON_REWIND, which means that at the end of the first consumption, the file handle will be closed.
  - These behavioral bits are dropped when the file stream is serialized over IPC because the serialized file descriptor cannot be reopened by the file stream in the content process. (Obviously, someone could ask for a new FD, but the file stream itself cannot.)
  - That means this behavior only happens in the parent process.
- The POST in this case in fact does consume the Blob in the parent process. Content JS code would not experience this if consuming the Blob itself.
- Because the SW does not handle the fetch event, interception will be reset and the intercepted channel redirected to a replacement nsHttpChannel which will want to POST the blob payload itself. This means it will seek the stream back to the beginning.
- DecryptingInputStream currently calls Tell on the underlying stream in its Seek implementation so it can restore all state in the event of a problem. But it does not handle the base stream reporting that it is closed, as it will do in the above case, even though the stream would be reopened if seek were called.
- This means the POST to the server currently fails because the call to seek fails, because the call to the underlying stream's Tell method has its error propagated.
This patch mitigates the problem by handling a report of the base stream being closed for a request to seek to the start of the stream: we seek the underlying stream to its start and then re-check the error. If the underlying stream is still closed (as is the case for any stream that does not have REOPEN_ON_REWIND semantics), we will still propagate the closed error. But in this situation, we will allow the REOPEN_ON_REWIND semantics to operate correctly.
Note that there are a bunch of existing "XXX" comments in DecryptingInputStream about not calling Seek/Tell that this patch does not currently try to address, but it could help clean this all up.
The ServiceWorkers test_file_blob_upload.js test provides coverage when run under PBM. Specifically, the helper file_blob_upload_frame.html creates an IDB-stored Blob and then performs a fetch POST of the Blob through its controlling ServiceWorker, which does not handle the fetch.
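The seek-to-start mitigation can be illustrated with a toy model: a stream with REOPEN_ON_REWIND semantics reopens itself when seeked back to offset 0, so a wrapper's seek(0) should try that before propagating a "closed" error. The class and function names below are illustrative, not the real nsIInputStream interfaces.

```javascript
// Toy REOPEN_ON_REWIND stream: fully reading it closes it; seeking back
// to offset 0 reopens it. Any other seek on a closed stream fails.
class ReopenOnRewindStream {
  constructor(data) { this.data = data; this.pos = 0; this.closed = false; }
  readAll() {
    const out = this.data.slice(this.pos);
    this.pos = this.data.length;
    this.closed = true; // models CLOSE_ON_EOF
    return out;
  }
  seek(offset) {
    if (offset === 0) { this.closed = false; this.pos = 0; return true; } // reopen
    if (this.closed) return false;
    this.pos = offset;
    return true;
  }
}

// Models the patched behavior: seek the base stream to its start first,
// and only propagate "closed" if the base stream stays closed afterwards.
function seekDecryptingStreamToStart(base) {
  base.seek(0);
  if (base.closed) throw new Error("base stream closed");
  return true;
}
```

Before the fix, the wrapper's initial Tell on the closed base stream failed and the whole seek (and thus the POST) failed, even though the base stream could have been reopened.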
Comment 24•3 months ago
Comment 25•3 months ago
Comment 26•3 months ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/f93ed49dbc28
https://hg.mozilla.org/mozilla-central/rev/965e5db0af1b
https://hg.mozilla.org/mozilla-central/rev/a54afb750109
https://hg.mozilla.org/mozilla-central/rev/f9fd810e029c
https://hg.mozilla.org/mozilla-central/rev/1e583de2110f
Comment 27•3 months ago
Is that worth mentioning in our general release notes or on the developer release notes?
Comment 28•3 months ago
(In reply to Pascal Chevrel:pascalc from comment #27)
Is that worth mentioning in our general release notes or on the developer release notes?
This is currently nightly only to provide some baking time due to how close to the end of the cycle it landed; I think it probably will make sense to relnote it in the bug to let it ride the trains (which should be in the next week or two).
Comment 29•3 months ago
We have release notes for the nightly channel.
Comment 30•3 months ago
(In reply to Pascal Chevrel:pascalc from comment #29)
We have release notes for the nightly channel.
Oh, then yes, it's worth noting.
Comment 31•3 months ago
Could you nominate it directly in Bugzilla? Thanks
https://wiki.mozilla.org/Release_Management/Release_Notes_Nomination#Nomination_in_Bugzilla
Comment 32•3 months ago
Backed out as requested by Aryx for causing perma mochitest failures at test_file_blob_upload.html
Comment 33•3 months ago
My comment on the permafail is at https://bugzilla.mozilla.org/show_bug.cgi?id=1903711#c11 where I note the permafail does differ from the prior intermittent failures there on that bug, so I'll be focusing comments here on this bug since it had the bulk of the patches.
I was not able to reproduce locally in my initial attempts but pernosco's self-serve automation got me this trace that I'm now looking at: https://pernos.co/debug/XCHiw6x5w1W2HVJZvoDBpQ/index.html
Comment 34•3 months ago
Backout merged to central: https://hg.mozilla.org/mozilla-central/rev/543c21c8e352
Comment 35•3 months ago
The tl;dr here seems to be:
- The use of the socket server process is what is key to experiencing this bug.
- That's because DecryptingInputStream's IPC (de)serialization logic only works if the underlying FD is at offset 0 in the file.
- If you call DecryptingInputStream::Available the underlying file descriptor will end up at the end of the file. The internal buffer state is okay and consistent with the file position, but only as long as the stream is never sent over IPC. When serialized, only the key and block size are serialized beyond the file stream serialization (fd, flags). So the DecryptingInputStream will be in a bad state after deserialization and a read will return zero bytes unless one explicitly seeks the stream back to a 0 offset. But callers reasonably assume that calling Available() won't destructively seek a stream.
So the pernosco trace from comment 33 shows this sequence of events:
- HttpBaseChannel::InternalSetUploadStream initiates the async Available() call on the STS
- On the STS thread:
- The FD is opened by the file stream's deferred open mechanism
- The last seek that ever happens on the file stream in the process by DecryptingInputStream as part of Available()
- DecryptingInputStream::ParseNextChunk calls ReadAll which reads the full 4k chunk, putting us at the EOF. Note that this does not trigger the auto-EOF close because we don't try and read at this point.
- Serialize and then Deserialize happens as the upload stream gets sent to the socket process.
- In the socket process we end up performing 2 reads as ParseNextChunk gets asked to figure out what's going on. Both times we end up trying to read 4k and getting 0 bytes. We never attempt to seek the file stream in this process:
- The first time is on the main thread where the mis-named NS_InputStreamIsBuffered is asking the stream to figure out if the stream supports ReadSegments (which is bad and I will be providing a fix in a separate bug; will edit this comment to insert bug reference here).
- The second time is here on the socket thread where we actually want to be sending the request body. Note that it doesn't seem amazing that we're synchronously reading from the file stream on the socket thread either; this seems like an awkward emergent property of this check on the upload stream that really wants Available to be reliable in the trace (alt: searchfox line). But I think that may be a somewhat known issue; I'll try and find the relevant necko bug about that and edit that in here, or file one otherwise.
Comment 36•3 months ago
In cases where Available() is called on a DecryptingInputStream in one process
and then serialized and deserialized to another process, if the underlying
stream retains its position, then the deserialized DecryptingInputStream will
be in a broken state because the serialization does not propagate the internal
state that makes the altered stream position okay.
This is because when restoring state with its final restorative Seek(), the
call to DecryptingInputStream::Seek will invoke ParseNextChunk() which will
perform a read (if possible).
This fix consists of a minor change to seek the underlying stream to its
start in EnsureBuffers which will always be called exactly once before we do
any I/O, and will only happen when we were going to do I/O anyways, so this
does not induce any I/O when it would not have happened. If the fix is
commented out, the test will fail permutations starting with
DOM_Quota_EncryptedStream_Parametrized/ParametrizedCryptTest.DummyCipherStrategy_SerializeRoundtrip.
The rest of the patch consists of test changes in order to test serialization
and ensure that we also have coverage for not calling the side-effect-including
Available() method prior to reads (in order to be consistent with what we see
with http request bodies when the socket process is in use).
In order to stick with the TestEncryptedStream.cpp GTest not using a file input
stream, DecryptingInputStream needed to have its serialization logic
generalized to support other serializable streams in the test.
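The bug-and-fix described in this comment can be condensed into a toy model: Available() advances the underlying stream, serialization preserves only the raw offset (not the internal buffer state that made it valid), and a deserialized reader then sees zero bytes. The fix rewinds the underlying stream when buffers are first set up. All names below are illustrative, not the real DecryptingInputStream code.

```javascript
// Toy model: available() reads ahead (moving the underlying offset),
// and serialize()/deserialize() carry the offset but not the internal
// buffer state, leaving the deserialized stream positioned at EOF.
class ModelStream {
  constructor(data, pos = 0) { this.data = data; this.pos = pos; }
  available() { this.pos = this.data.length; return this.data.length; }
  serialize() { return { data: this.data, pos: this.pos }; }
  static deserialize(s) { return new ModelStream(s.data, s.pos); }
}

// applyFix models the EnsureBuffers change: rewind the underlying stream
// to 0 exactly once, before the first real I/O.
function readAllWithFix(stream, applyFix) {
  if (applyFix) stream.pos = 0;
  return stream.data.slice(stream.pos);
}
```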
Comment 37•2 months ago
Comment 38•2 months ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/c59e913dd99b
https://hg.mozilla.org/mozilla-central/rev/04b80d38b2f1
https://hg.mozilla.org/mozilla-central/rev/db68d7bf4699
https://hg.mozilla.org/mozilla-central/rev/9249b541544a
https://hg.mozilla.org/mozilla-central/rev/28970a9541d1
Comment 39•2 months ago
:asuth did you want to mention this in the 138 release notes? Please nominate if so.
Comment 40•2 months ago
This is currently @IS_NIGHTLY_BUILD@. The feature landed in the 138 cycle, but it currently isn't on track to ship in 138.
Do we need a separate bug to track a future "let it ride the trains" change? Or is that why bug 1266409 was left open instead of being duped to this bug?
Comment 41•2 months ago
(In reply to Daniel Veditz [:dveditz] from comment #40)
This is currently @IS_NIGHTLY_BUILD@. The feature landed in the 138 cycle, but it currently isn't on track to ship in 138
Do we need a separate bug to track a future "let it ride the trains" change? Or is that why bug 1266409 was left open instead of being duped to this bug?
I've filed bug 1959535 to track enabling in release; this should ideally happen in 138.
I'll dupe bug 1266409 to this bug for clarity, thanks.
Comment 43•2 months ago
Release Note Request (optional, but appreciated)
[Why is this notable]: More APIs working in Private Browsing Mode.
[Affects Firefox for Android]: Yes
[Suggested wording]: Service Workers are now available in Private Browsing Mode. Building on prior work supporting IndexedDB and the DOM Cache API in Private Browsing Mode through use of encrypted storage, sites that benefit from Service Workers will be able to use them.
[Links (documentation, blog post, etc)]: None currently.
Comment 44•2 months ago
Added to Nightly Release notes with slightly altered wording
Starting with Firefox 138, Service Workers are now available in Private Browsing Mode in Nightly builds. This enhancement builds on our efforts to support IndexedDB and the DOM Cache API in Private Browsing through encrypted storage. With this change, more websites, especially those that rely on background tasks, will be able to benefit from Service Workers.
When bug 1959535 goes live, a relnote can be requested there and we will convert the nightly note to a beta/release note.
Comment 45•1 month ago
Removing from the Nightly only release notes for Fx140+
This was enabled by default for Fx139 in Bug 1959535